-
Given the enormous output and pace of development of artificial intelligence (AI) methods in medical imaging, it can be challenging to identify the true success stories and determine the state of the art of the field. This report provides the magnetic resonance imaging (MRI) community with an initial guide to the major areas in which AI methods are contributing to MRI in oncology. After a general introduction to artificial intelligence, we discuss the successes and current limitations of AI in MRI for image acquisition, reconstruction, registration, and segmentation, as well as its utility in diagnostic and prognostic settings. Within each section, we aim for a balanced summary, presenting common techniques, state of readiness, current clinical needs, and barriers to practical deployment in the clinical setting. We conclude with areas in which new advances must be realized to address questions of generalizability, quality assurance and control, and uncertainty quantification when applying MRI to cancer, so as to maintain patient safety and practical utility.

Free, publicly-accessible full text available April 9, 2026
-
Xu, Jinbo (Ed.)

Abstract

Motivation: Motions of transmembrane receptors on cancer cell surfaces can reveal biophysical features of the cancer cells, thus providing a method for characterizing cancer cell phenotypes. While conventional analysis of receptor motions in the cell membrane mostly relies on mean-squared displacement plots, much information is lost when producing these plots from the trajectories. Here we employ deep learning to classify breast cancer cell types based on the trajectories of the epidermal growth factor receptor (EGFR). Our model is an artificial neural network trained on EGFR motions acquired from six breast cancer cell lines of varying invasiveness and receptor status: MCF7 (hormone receptor positive), BT474 (HER2-positive), SKBR3 (HER2-positive), MDA-MB-468 (triple negative, TN), MDA-MB-231 (TN), and BT549 (TN).

Results: The model classified the trajectories within individual cell lines with 83% accuracy and predicted receptor status with 85% accuracy. To further validate the method, epithelial–mesenchymal transition (EMT) was induced in benign MCF10A cells, noninvasive MCF7 cancer cells, and highly invasive MDA-MB-231 cancer cells, and EGFR trajectories from these cells were tested. As expected, after EMT induction both MCF10A and MCF7 cells, but not MDA-MB-231 cells, showed higher rates of classification as TN cells. Whereas deep learning-based cancer cell classification has primarily relied on optical transmission images of cell morphology and fluorescence images of cell organelles or cytoskeletal structures, here we demonstrate an alternative way to classify cancer cells using a dynamic, biophysical feature that is readily accessible.

Availability and implementation: A Python implementation of the deep learning-based classification can be found at https://github.com/soonwoohong/Deep-learning-for-EGFR-trajectory-classification.

Supplementary information: Supplementary data are available at Bioinformatics online.
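The abstract contrasts its trajectory-level deep learning approach with conventional mean-squared displacement (MSD) analysis. As background, the MSD curve the authors refer to can be computed from a single uniformly sampled trajectory as shown below. This is a generic, minimal sketch: the function name and the simulated Brownian input are illustrative and are not taken from the authors' repository.

```python
import numpy as np

def msd(trajectory, max_lag=None):
    """Mean-squared displacement of a 2D trajectory.

    trajectory: (N, 2) array of positions at uniform time steps.
    Returns msd[k-1] = <|r(t+k) - r(t)|^2> averaged over t, for k = 1..max_lag.
    """
    traj = np.asarray(trajectory, dtype=float)
    n = len(traj)
    if max_lag is None:
        max_lag = n // 2
    out = np.empty(max_lag)
    for k in range(1, max_lag + 1):
        # All displacements separated by k time steps (time-averaged MSD).
        disp = traj[k:] - traj[:-k]
        out[k - 1] = np.mean(np.sum(disp**2, axis=1))
    return out

# Sanity check on simulated free diffusion: for 2D Brownian motion with
# unit-variance steps per coordinate, the expected MSD grows as 2*k.
rng = np.random.default_rng(0)
steps = rng.normal(0.0, 1.0, size=(10000, 2))
brownian = np.cumsum(steps, axis=0)
curve = msd(brownian, max_lag=10)
```

The linear growth of this curve (and deviations from linearity for confined or directed motion) is exactly the coarse summary the abstract argues discards information relative to feeding raw trajectories to a neural network.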
-
Abstract

Fluorescence lifetime imaging microscopy (FLIM) is a powerful tool for quantifying molecular compositions and studying molecular states in complex cellular environments, as the lifetime readings are not biased by fluorophore concentration or excitation power. However, current methods for generating FLIM images are either computationally intensive or unreliable when the number of photons acquired at each pixel is low. Here we introduce a new deep learning-based method termed flimGANE (fluorescence lifetime imaging based on Generative Adversarial Network Estimation) that can rapidly generate accurate, high-quality FLIM images even under photon-starved conditions. We demonstrate that our model is up to 2,800 times faster than the gold-standard time-domain maximum likelihood estimation (TD_MLE) and that flimGANE provides a more accurate analysis of low-photon-count histograms in barcode identification, cellular structure visualization, Förster resonance energy transfer characterization, and metabolic state analysis in live cells. With its advantages in speed and reliability, flimGANE is particularly useful in fundamental biological research and clinical applications where high-speed analysis is critical.
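For context on the TD_MLE baseline the abstract benchmarks against: time-domain methods estimate the fluorescence lifetime from photon arrival times by maximum likelihood. The sketch below is a deliberately simplified toy, assuming a mono-exponential decay, no background, and an effectively infinite collection window, in which case the MLE of the lifetime reduces to the sample mean of the arrival times. A real TD-MLE fit (and flimGANE itself) must additionally handle the instrument response function, a finite time window, and multi-exponential decays; none of the names below come from the paper.

```python
import numpy as np

def mle_lifetime(arrival_times):
    """MLE of the lifetime for a background-free mono-exponential decay.

    For photon arrival times t_i ~ Exp(1/tau) observed over an unbounded
    window, maximizing the log-likelihood sum(-log(tau) - t_i/tau) gives
    tau_hat = mean(t_i).
    """
    t = np.asarray(arrival_times, dtype=float)
    return t.mean()

# Simulate a photon-rich pixel: 5000 arrival times from a 2.5 ns decay.
rng = np.random.default_rng(1)
true_tau = 2.5  # ns (assumed value for this toy example)
photons = rng.exponential(true_tau, size=5000)
tau_hat = mle_lifetime(photons)
```

The estimator's error scales as tau/sqrt(N), which is why such fits degrade sharply in the photon-starved regime (tens of photons per pixel) that flimGANE is designed to address.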